6 research outputs found

    The Future of Human-Artificial Intelligence Nexus and its Environmental Costs

    Environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion of the environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related costs of research. Building on the foundations laid down by Schwartz et al., 2019 in the GreenAI initiative, our argument considers two interlinked phenomena: gratuitous generalisation capability and a future in which ML/AI performs the majority of quantifiable inductive inferences. Gratuitous generalisation capability refers to a discrepancy between the cognitive demands of a task to be accomplished and the performance (accuracy) of the ML/AI model used. If the latter exceeds the former because the model was optimised to achieve the best possible accuracy, the model becomes inefficient and its operation harmful to the environment. The future dominated by non-anthropic induction describes a use of ML/AI so all-pervasive that most inductive inferences are furnished by ML/AI generalisations. The paper argues that the present debate deserves to be expanded so as to connect the environmental costs of research and of ineffective ML/AI uses (the issue of gratuitous generalisation capability) with a (near) future marked by the all-pervasive Human-Artificial Intelligence Nexus.
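    As a rough illustration of gratuitous generalisation capability, the Python sketch below (with purely hypothetical accuracy and energy figures, not drawn from the paper) selects the least energy-hungry model that still meets a task's accuracy demand; any extra accuracy bought at higher energy cost then counts as gratuitous.

    candidate_models = [
        # (name, test accuracy, energy per training run in kWh) -- hypothetical figures
        ("logistic_regression", 0.91, 0.4),
        ("small_transformer", 0.95, 120.0),
        ("large_transformer", 0.96, 4800.0),
    ]

    task_required_accuracy = 0.90  # the cognitive demand of the task

    # Choose the least energy-intensive model that is good enough for the task.
    adequate = [m for m in candidate_models if m[1] >= task_required_accuracy]
    chosen_name, chosen_acc, chosen_kwh = min(adequate, key=lambda m: m[2])
    print(f"chosen: {chosen_name} ({chosen_acc:.2f} accuracy, {chosen_kwh} kWh)")

    # Any adequate but costlier model exhibits gratuitous generalisation capability:
    # accuracy beyond the task demand, paid for in energy.
    for name, acc, kwh in candidate_models:
        if acc >= task_required_accuracy and kwh > chosen_kwh:
            print(f"{name}: +{acc - task_required_accuracy:.2f} accuracy over the demand "
                  f"at {kwh - chosen_kwh:.0f} extra kWh")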

    What Can Artificial Intelligence Do for Scientific Realism?

    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrant of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement, since it rejects the retrospective interpretations of scientific progress that brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for unconceived alternatives, providing modal knowledge of what is possible therein. As a result, the epistemic warrant of synthesised realist theories should emerge bolstered as the underdetermination by available evidence is reduced. By shifting the realist commitment away from theoretical artefacts towards modalities of the possibility spaces, the synthesis comes out as a kind of perspectival modelling.
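    The following Python sketch only gestures at the kind of adversarial exploration the abstract mentions; it is not the paper's method, just a toy FGSM-style perturbation of a hypothetical linear classifier, showing how small moves through the input space probe which nearby 'evidence' the model would classify differently.

    import numpy as np

    w, b = np.array([1.5, -2.0]), 0.1              # hypothetical trained weights

    def predict_proba(x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # toy logistic classifier

    x = np.array([0.8, 0.3])                       # one observed piece of 'evidence'
    epsilon = 0.3                                  # perturbation budget

    # FGSM-style step: nudge the input against the gradient of its class score,
    # i.e. explore the nearby region of the evidence space where the verdict flips.
    direction = np.sign(w) if predict_proba(x) >= 0.5 else -np.sign(w)
    x_adv = x - epsilon * direction

    print(f"original evidence {x} -> p(class 1) = {predict_proba(x):.2f}")
    print(f"perturbed evidence {x_adv} -> p(class 1) = {predict_proba(x_adv):.2f}")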

    Human Induction in Machine Learning: A Survey of the Nexus

    As our epistemic ambitions grow, common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into training and testing sets and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The paper asks how we justify this contract between humans and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach ‘elsewheres’ in space and time or deploy ML models in non-benign environments. The paper argues that the only viable version of the contract is based on optimality (rather than on reliability, which cannot be justified without circularity) and aligns this position with Schurz’s optimality justification. It is shown that when dealing with inaccessible or unstable ground truths (‘elsewheres’ and non-benign targets), the optimality justification undergoes a slight change, which should make us reflect critically on our epistemic ambitions. Therefore, the study of ML robustness should involve not only heuristics that lead to acceptable accuracies on testing sets; the justification of human inductive predictions or generalisations about the uniformity between ML models and targets should be included as well. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.
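    The experimental paradigm described above can be made concrete with a minimal scikit-learn sketch (synthetic data and an illustrative acceptability threshold, not taken from the paper); the final step, where an acceptable test score licenses deployment, is the human inductive inference the paper interrogates.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # The single experimental paradigm: split, train, measure on held-out data.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    test_accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"accuracy on the testing set: {test_accuracy:.2f}")

    # The 'a posteriori contract': an acceptable test score is taken to warrant
    # deployment, yet the inference from this score to unseen target environments
    # is a human inductive generalisation the test set itself cannot justify.
    ACCEPTABLE = 0.90
    if test_accuracy >= ACCEPTABLE:
        print("contract invoked: deployment to target environments assumed warranted")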

    No-Regret Learning Supports Voters’ Competence

    Procedural justifications of democracy emphasize inclusiveness and respect, and by doing so come into conflict with instrumental justifications that depend on voters’ competence. This conflict raises questions about jury theorems and makes their standing in democratic theory contested. We show that a type of no-regret learning called meta-induction can help to satisfy the competence assumption without excluding voters or diverse opinion leaders on an a priori basis. Meta-induction assigns weights to opinion leaders based on their past predictive performance to determine the degree of their inclusion in recommendations to voters. The weighting minimizes the difference between the performance of meta-induction and that of the best opinion leader in hindsight. This difference represents the regret of meta-induction, whose minimization ensures that the recommendations are optimal in supporting voters’ competence. Meta-induction has optimal truth-tracking properties that support voters’ competence even when it is targeted by mis/disinformation, and it should be considered a tool for supporting democracy under hyper-plurality.
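    A minimal Python sketch of the weighting scheme described above, assuming an exponential-weights (Hedge-style) update and simulated opinion leaders; the losses, learning rate, and leader behaviours are illustrative choices, not the paper's own setup.

    import numpy as np

    rng = np.random.default_rng(1)
    T, n_leaders = 200, 3
    eta = np.sqrt(8 * np.log(n_leaders) / T)   # usual learning rate for exponential weights

    outcomes = rng.integers(0, 2, size=T).astype(float)        # 0/1 ground truth per round
    predictions = np.stack([                                   # hypothetical opinion leaders
        np.clip(outcomes + rng.normal(0, 0.2, T), 0, 1),       # well-informed leader
        rng.uniform(0, 1, T),                                  # uninformative leader
        np.clip(1 - outcomes + rng.normal(0, 0.3, T), 0, 1),   # contrarian leader
    ])

    weights = np.ones(n_leaders) / n_leaders
    meta_loss, leader_losses = 0.0, np.zeros(n_leaders)
    for t in range(T):
        recommendation = weights @ predictions[:, t]       # weighted advice offered to voters
        losses = (predictions[:, t] - outcomes[t]) ** 2    # squared loss per leader this round
        meta_loss += (recommendation - outcomes[t]) ** 2
        leader_losses += losses
        weights *= np.exp(-eta * losses)                   # down-weight poorly performing leaders
        weights /= weights.sum()

    # Regret: how much worse the aggregate recommendation did than the best leader in hindsight.
    regret = meta_loss - leader_losses.min()
    print(f"meta-induction loss {meta_loss:.1f}, best leader in hindsight {leader_losses.min():.1f}, "
          f"regret {regret:.1f}")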